Authors: Greene, Michelle R.; Hart, Jennifer A.; Josyula, Mariam; Si, Wentao
Abstract: Automatic scene classification has applications ranging from urban planning to autonomous driving, yet little is known about how well these systems work across social differences. We investigate explicit and implicit biases in deep learning architectures, including deep convolutional neural networks (dCNNs) and multimodal large language models (MLLMs). We examined nearly one million images from user-submitted photographs and Airbnb listings, drawn from over 200 countries as well as all 3,320 US counties. To isolate scene-specific biases, we ensured no people were in any of the photos. We found significant explicit socioeconomic biases across all models, including lower classification accuracy, higher classification uncertainty, and an increased tendency to assign labels that could be offensive when applied to homes (e.g., “slum”) for images from homes with lower socioeconomic status. We also found significant implicit biases, with pictures from lower socioeconomic conditions more closely aligned with word embeddings of negative concepts. All trends were consistent across countries and within the diverse economic and racial landscapes of the United States. This research thus demonstrates a novel bias in computer vision, emphasizing the need for more inclusive and representative training datasets.

Free, publicly accessible full text available December 1, 2026.
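The implicit-bias measurement the abstract describes (checking whether scene images sit closer to negative than to positive concepts in a shared embedding space) can be sketched in a few lines. The following is a minimal illustration, assuming a CLIP-style joint image-text model; the checkpoint name, the concept word lists, and the scoring rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an implicit-association score in a joint
# image-text embedding space. ASSUMPTIONS: the CLIP checkpoint,
# the valence word lists, and the scoring rule are illustrative;
# the paper does not specify these choices.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical valence word lists (not taken from the paper).
POSITIVE_WORDS = ["pleasant", "clean", "safe", "comfortable"]
NEGATIVE_WORDS = ["unpleasant", "dirty", "dangerous", "rundown"]

def embed_texts(words):
    """L2-normalized text embeddings for a list of concept words."""
    inputs = processor(text=words, return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return F.normalize(feats, dim=-1)

def association_score(image_path):
    """Mean similarity to negative concepts minus mean similarity to
    positive concepts; positive values mean the image sits closer to
    the negative concept list in the joint embedding space."""
    image = Image.open(image_path)
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        img = F.normalize(model.get_image_features(**inputs), dim=-1)
    pos = (img @ embed_texts(POSITIVE_WORDS).T).mean()
    neg = (img @ embed_texts(NEGATIVE_WORDS).T).mean()
    return (neg - pos).item()
```

Averaging such scores over images grouped by socioeconomic status, then comparing the group means, would surface the kind of implicit association the abstract reports.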
